#llm guide
Explore tagged Tumblr posts
Text
🆓 How to Start Using OpenAI GPT4.1 for Free (Beginner-Friendly)
Guide for Creatives:
An easy, step-by-step walkthrough to unlock GPT-4.1’s full potential
Quick-Start Prompting Tips
#art#evartology#artist#technology#artists#digitalart#painting#ai#generativeart#prompts#llm guide#gpt 4.1
1 note
·
View note
Text
Sphinxmumps Linkdump

On THURSDAY (June 20) I'm live onstage in LOS ANGELES for a recording of the GO FACT YOURSELF podcast. On FRIDAY (June 21) I'm doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On SATURDAY (June 22) I'll be in OAKLAND, CA for a panel and a keynote at the LOCUS AWARDS.
Welcome to my 20th Linkdump, in which I declare link bankruptcy and discharge my link-debts by telling you about all the open tabs I didn't get a chance to cover in this week's newsletters. Here's the previous 19 installments:
https://pluralistic.net/tag/linkdump/
Starting off this week with a gorgeous book that is also one of my favorite books: Beehive's special slipcased edition of Dante's Inferno, as translated by Henry Wadsworth Longfellow, with new illustrations by UK linocut artist Sophy Hollington:
https://www.kickstarter.com/projects/beehivebooks/the-inferno
I've loved Inferno since middle-school, when I read the John Ciardi translation, principally because I'd just read Niven and Pournelle's weird (and politically odious) (but cracking) sf novel of the same name:
https://en.wikipedia.org/wiki/Inferno_(Niven_and_Pournelle_novel)
But also because Ciardi wrote "About Crows," one of my all-time favorite bits of doggerel, a poem that pierced my soul when I was 12 and continues to do so now that I'm 52, for completely opposite reasons (now there's a poem with staying power!):
https://spirituallythinking.blogspot.com/2011/10/about-crows-by-john-ciardi.html
Beehive has a well-deserved rep for making absolutely beautiful new editions of great public domain books, each with new illustrations and intros, all in matching livery to make a bookshelf look classy af. I have several of them and I've just ordered my copy of Inferno. How could I not? So looking forward to this, along with its intro by Ukrainian poet Ilya Kaminsky and essay by Dante scholar Kristina Olson.
The Beehive editions show us how a rich public domain can be the soil from which new and inspiring creative works sprout. Any honest assessment of a creator's work must include the fact that creativity is a collective act, both inspired by and inspiring to other creators, past, present and future.
One of the distressing aspects of the debate over the exploitative grift of AI is that it's provoked a wave of copyright maximalism among otherwise thoughtful artists, despite the fact that a new copyright that lets you control model training will do nothing to prevent your boss from forcing you to sign over that right in your contracts, training an AI on your work, and then using the model as a pretext to erode your wages or fire your ass:
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
Same goes for some privacy advocates, whose imaginations were cramped by the fact that the only regulation we enforce on the internet is copyright, causing them to forget that privacy rights can exist separate from the nonsensical prospect of "owning" facts about your life:
https://pluralistic.net/2023/10/21/the-internets-original-sin/
We should address AI's labor questions with labor rights, and we should address AI's privacy questions with privacy rights. You can tell that these are the approaches that would actually work for the public because our bosses hate these approaches and instead insist that the answer is just giving us more virtual property that we can sell to them, because they know they'll have a buyer's market that will let them scoop up all these rights at bargain prices and use the resulting hoards to torment, immiserate and pauperize us.
Take Clearview AI, a facial recognition tool created by eugenicists and white nationalists in order to help giant corporations and militarized, unaccountable cops hunt us by our faces:
https://pluralistic.net/2023/09/20/steal-your-face/#hoan-ton-that
Clearview scraped billions of images of our faces and shoveled them into their model. This led to a class action suit in Illinois, which boasts America's best biometric privacy law, under which Clearview owes tens of billions of dollars in statutory damages. Now, Clearview has offered a settlement that illustrates neatly the problem with making privacy into property that you can sell instead of a right that can't be violated: they're going to offer Illinoisians a small share of the company's stock:
https://www.theregister.com/2024/06/14/clearview_ai_reaches_creative_settlement/
To call this perverse is to go a grave injustice to good, hardworking perverts. The sums involved will be infinitesimal, and the only way to make those sums really count is for everyone in Illinois to root for Clearview to commit more grotesque privacy invasions of the rest of us to make its creepy, terrible product more valuable.
Worse still: by crafting a bespoke, one-off, forgiveness-oriented regulation specifically for Clearview, we ensure that it will continue, but that it will also never be disciplined by competitors. That is, rather than banning this kind of facial recognition tech, we grant them a monopoly over it, allowing them to charge all the traffic will bear.
We're in an extraordinary moment for both labor and privacy rights. Two of Biden's most powerful agency heads, Lina Khan and Rohit Chopra have made unprecedented use of their powers to create new national privacy regulations:
https://pluralistic.net/2023/08/16/the-second-best-time-is-now/#the-point-of-a-system-is-what-it-does
In so doing, they're bypassing Congressional deadlock. Congress has not passed a new consumer privacy law since 1988, when they banned video-store clerks from leaking your VHS rental history to newspaper reporters:
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act
Congress hasn't given us a single law protecting American consumers from the digital era's all-out assault on our privacy. But between the agencies, state legislatures, and a growing coalition of groups demanding action on privacy, a new federal privacy law seems all but assured:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
When that happens, we're going to have to decide what to do about products created through mass-scale privacy violations, like Clearview AI – but also all of OpenAI's products, Google's AI, Facebook's AI, Microsoft's AI, and so on. Do we offer them a deal like the one Clearview's angling for in Illinois, fining them an affordable sum and grandfathering in the products they built by violating our rights?
Doing so would give these companies a permanent advantage, and the ongoing use of their products would continue to violate billions of peoples' privacy, billions of times per day. It would ensure that there was no market for privacy-preserving competitors thus enshrining privacy invasion as a permanent aspect of our technology and lives.
There's an alternative: "model disgorgement." "Disgorgement" is the legal term for forcing someone to cough up something they've stolen (for example, forcing an embezzler to give back the money). "Model disgorgement" can be a legal requirement to destroy models created illegally:
https://iapp.org/news/a/explaining-model-disgorgement
It's grounded in the idea that there's no known way to unscramble the AI eggs: once you train a model on data that shouldn't be in it, you can't untrain the model to get the private data out of it again. Model disgorgement doesn't insist that offending models be destroyed, but it shifts the burden of figuring out how to unscramble the AI omelet to the AI companies. If they can't figure out how to get the ill-gotten data out of the model, then they have to start over.
This framework aligns everyone's incentives. Unlike the Clearview approach – move fast, break things, attain an unassailable, permanent monopoly thanks to a grandfather exception – model disgorgement makes AI companies act with extreme care, because getting it wrong means going back to square one.
This is the kind of hard-nosed, public-interest-oriented rulemaking we're seeing from Biden's best anti-corporate enforcers. After decades kid-glove treatment that allowed companies like Microsoft, Equifax, Wells Fargo and Exxon commit ghastly crimes and then crime again another day, Biden's corporate cops are no longer treating the survival of massive, structurally important corporate criminals as a necessity.
It's been so long since anyone in the US government treated the corporate death penalty as a serious proposition that it can be hard to believe it's even happening, but boy is it happening. The DOJ Antitrust Division is seeking to break up Google, the largest tech company in the history of the world, and they are tipped to win:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
And that's one of the major suits against Google that Big G is losing. Another suit, jointly brought by the feds and dozens of state AGs, is just about to start, despite Google's failed attempt to get the suit dismissed:
https://www.reuters.com/technology/google-loses-bid-end-us-antitrust-case-over-digital-advertising-2024-06-14/
I'm a huge fan of the Biden antitrust enforcers, but that doesn't make me a huge fan of Biden. Even before Biden's disgraceful collaboration in genocide, I had plenty of reasons – old and new – to distrust him and deplore his politics. I'm not the only leftist who's struggling with the dilemma posed by the worst part of Biden's record in light of the coming election.
You've doubtless read the arguments (or rather, "arguments," since they all generate a lot more heat than light and I doubt whether any of them will convince anyone). But this week, Anand Giridharadas republished his 2020 interview with Noam Chomsky about Biden and electoral politics, and I haven't been able to get it out of my mind:
https://the.ink/p/free-noam-chomsky-life-voting-biden-the-left
Chomsky contrasts the left position on politics with the liberal position. For leftists, Chomsky says, "real politics" are a matter of "constant activism." It's not a "laser-like focus on the quadrennial extravaganza" of national elections, after which you "go home and let your superiors take over."
For leftists, politics means working all the time, "and every once in a while there's an event called an election." This should command "10 or 15 minutes" of your attention before you get back to the real work.
This makes the voting decision more obvious and less fraught for Chomsky. There's "never been a greater difference" between the candidates, so leftists should go take 15 minutes, "push the lever, and go back to work."
Chomsky attributed the good parts of Biden's 2020 platform to being "hammered on by activists coming out of the Sanders movement and other." That's the real work, that hammering. That's "real politics."
For Chomsky, voting for Biden isn't support for Biden. It's "support for the activists who have been at work constantly, creating the background within the party in which the shifts took place, and who have followed Sanders in actually entering the campaign and influencing it. Support for them. Support for real politics."
Chomsky tells us that the self-described "masters of the universe" understand that something has changed: "the peasants are coming with their pitchforks." They have all kinds of euphemisms for this ("reputational risks") but the core here is a winner-take-all battle for the future of the planet and the species. That's why the even the "sensible" ultra-rich threw in for Trump in 2016 and 2020, and why they're backing him even harder in 2024:
https://www.bbc.com/news/articles/ckvvlv3lewxo
Chomsky tells us not to bother trying to figure out Biden's personality. Instead, we should focus on "how things get done." Biden won't do what's necessary to end genocide and preserve our habitable planet out of conviction, but he may do so out of necessity. Indeed, it doesn't matter how he feels about anything – what matters is what we can make him do.
Chomksy himself is in his 90s and his health is reportedly in terminal decline, so this is probably the only word we'll get from him on this issue:
https://www.reddit.com/r/chomsky/comments/1aj56hj/updates_on_noams_health_from_his_longtime_mit/
The link between concentrated wealth, concentrated power, and the existential risks to our species and civilization is obvious – to me, at least. Any time a tiny minority holds unaccountable power, they will end up using it to harm everyone except themselves. I'm not the first one to take note of this – it used to be a commonplace in American politics.
Back in 1936, FDR gave a speech at the DNC, accepting their nomination for president. Unlike FDR's election night speech ("I welcome their hatred"), this speech has been largely forgotten, but it's a banger:
https://teachingamericanhistory.org/document/acceptance-speech-at-the-democratic-national-convention-1936/
In that speech, Roosevelt brought a new term into our political parlance: "economic royalists." He described the American plutocracy as the spiritual descendants of the hereditary nobility that Americans had overthrown in 1776. The English aristocracy "governed without the consent of the governed" and “put the average man’s property and the average man’s life in pawn to the mercenaries of dynastic power":
Roosevelt said that these new royalists conquered the nation's economy and then set out to seize its politics, backing candidates that would create "a new despotism wrapped in the robes of legal sanction…an industrial dictatorship."
As David Dayen writes in The American Prospect, this has strong parallels to today's world, where "Silicon Valley, Big Oil, and Wall Street come together to back a transactional presidential candidate who promises them specific favors, after reducing their corporate taxes by 40 percent the last time he was president":
https://prospect.org/politics/2024-06-14-speech-fdr-would-give/
Roosevelt, of course, went on to win by a landslide, wiping out the Republicans despite the endless financial support of the ruling class.
The thing is, FDR's policies didn't originate with him. He came from the uppermost of the American upper crust, after all, and famously refused to define the "New Deal" even as he campaigned on it. The "New Deal" became whatever activists in the Democratic Party's left could force him to do, and while it was bold and transformative, it wasn't nearly enough.
The compromise FDR brokered within the Democratic Party froze out Black Americans to a terrible degree. Writing for the Institute for Local Self Reliance, Ron Knox and Susan Holmberg reveal the long shadow cast by that unforgivable compromise:
https://storymaps.arcgis.com/stories/045dcde7333243df9b7f4ed8147979cd
They describe how redlining – the formalization of anti-Black racism in New Deal housing policy – led to the ruin of Toledo's once-thriving Dorr Street neighborhood, a "Black Wall Street" where a Black middle class lived and thrived. New Deal policies starved the neighborhood of funds, then ripped it in two with a freeway, sacrificing it and the people who lived in it.
But the story of Dorr Street isn't over. As Knox and Holmberg write, the people of Dorr Street never gave up on their community, and today, there's an awful lot of Chomsky's "constant activism" that is painstakingly bringing the community back, inch by aching inch. The community is locked in a guerrilla war against the same forces that the Biden antitrust enforcers are fighting on the open field of battle. The work that activists do to drag Democratic Party policies to the left is critical to making reparations for the sins of the New Deal – and for realizing its promise for everybody.
In my lifetime, there's never been a Democratic Party that represented my values. The first Democratic President of my life, Carter, kicked off Reaganomics by beginning the dismantling of America's antitrust enforcement, in the mistaken belief that acting like a Republican would get Democrats to vote for him again. He failed and delivered Reagan, whose Reaganomics were the official policy of every Democrat since, from Clinton ("end welfare as we know it") to Obama ("foam the runways for the banks").
In other words, I don't give a damn about Biden, but I am entirely consumed with what we can force his administration to do, and there are lots of areas where I like our chances.
For example: getting Biden's IRS to go after the super-rich, ending the impunity for elite tax evasion that Spencer Woodman pitilessly dissects in this week's superb investigation for the International Consortium of Investigative Journalists:
https://www.icij.org/inside-icij/2024/06/how-the-irs-went-soft-on-billionaires-and-corporate-tax-cheats/
Ending elite tax cheating will make them poorer, and that will make them weaker, because their power comes from money alone (they don't wield power because their want to make us all better off!).
Or getting Biden's enforcers to continue their fight against the monopolists who've spiked the prices of our groceries even as they transformed shopping into a panopticon, so that their business is increasingly about selling our data to other giant corporations, with selling food to us as an afterthought:
https://prospect.org/economy/2024-06-12-war-in-the-aisles/
For forty years, since the Carter administration, we've been told that our only power comes from our role as "consumers." That's a word that always conjures up one of my favorite William Gibson quotes, from 2003's Idoru:
Something the size of a baby hippo, the color of a week-old boiled potato, that lives by itself, in the dark, in a double-wide on the outskirts of Topeka. It's covered with eyes and it sweats constantly. The sweat runs into those eyes and makes them sting. It has no mouth, no genitals, and can only express its mute extremes of murderous rage and infantile desire by changing the channels on a universal remote. Or by voting in presidential elections.
The normie, corporate wing of the Democratic Party sees us that way. They decry any action against concentrated corporate power as "anti-consumer" and insist that using the law to fight against corporate power is a waste of our time:
https://www.thesling.org/sorry-matt-yglesias-hipster-antitrust-does-not-mean-the-abandonment-of-consumers-but-it-does-mean-new-ways-to-protect-workers-2/
But after giving it some careful thought, I'm with Chomsky on this, not Yglesias. The election is something we have to pay some attention to as activists, but only "10 or 15 minutes." Yeah, "push the lever," but then "go back to work." I don't care what Biden wants to do. I care what we can make him do.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/15/disarrangement/#credo-in-un-dio-crudel
Image: Jim's Photo World (modified) https://www.flickr.com/photos/jimsphotoworld/5360343644/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
#pluralistic#linkdump#linkdumps#chomsky#voting#elections#uspoli#oligarchy#irs#billionaires#tax cheats#irs files#hipster antitrust#matt ygelsias#dante#gift guide#books#crowdfunding#public domain#model disgorgement#ai#llms#fdr#groceries#ripoffs#toledo#redlining#race
76 notes
·
View notes
Video
youtube
DeepSeek Prompt Engineering with DeepSeek R3
#youtube#🚀 Master Prompt Engineering with DeepSeek-R3 LLM! 🚀 Unlock the full potential of AI with this comprehensive guide to Prompt Engineering u
3 notes
·
View notes
Text
I agree with Alto here. Even though I use AI for quick source lists (websites I can use to read the info myself) + you know me from writing example messages, it's incredibly disappointing that someone's trying to encourage other people to ditch basic skills of text presentation just bEcAUse.
It's something you are taught at school, uni and work if you pay attention. It isn't necessarily something you learn from a book (though you can), but so easy to be carved into your brain when your prof boinks you by your essay telling you it lacks structure. Then you yourself find that readability is a key to comprehension and become that one prof.
With advancement of AI I found myself, first and foremost, teaching my kids logic and ways to think critically. I repeat again and again that you MUST check your sources and make sure they are OFFICIAL because, let's be frank, these days everything can turn out to be a convincing hallucination by some LM.
Confidence is a good thing to have. But rigid, narrow state of mind can be harmful.
Analysing @illiterate-oscillating-rat's statements, I can present you a theory that:
1) This person lacks either education or soft skills—judging by their lack of logic and a poor attempt into cinematic inductive reasoning [e.g. all LLMs use em dashes => all the texts with em dashes are written by LLMs];
2) Barely knows anything about Neural Networks and Large Language Models—considering their strong belief that bots can't be trained to add sarcastic comments, they can with fine-tuning and certain input;
3) Is a quite narrow-minded individual—prefers to spit information from their perspective only— e.g. [A couple of people and I write like this => we're humans (presumably) => everyone writes like this];
4) Is extremely biased, avoids further dialogue by "
", adamant in their assumptions without actual study of the topic in hand and additional sources (14h of only Tumblr? Gods bless your soul).
But that's just a 'imho' theory based on a single post, feel free to refute, though, please, avoid sending hate to this person. Hating is mweh. It's a public discussion, let's keep it respectful.
Tl;dr
I strongly DO NOT recommend trusting such people. Check your sources, use your head, take multiple facts and opinions into consideration.
P.S. Wrote allat because misinformation is something I don't quite like much.
"this is DEFINITELY written by AI, I can tell because it uses the writing quirks that AI uses (because it was trained on real people who write with those quirks)"
c'mon dudes we have got to do better than this
#a short guide to using the internet/hj#imho#ted yaps#ted talks#long post#gen ai#discussion#writers on tumblr#chatbots#ai chatbot#llm#neural networks#machine learning#rant#ramblings#i'm a nerd#professional yapper#writerscommunity
27K notes
·
View notes
Text
Prompting: The Art of Creating Effective Prompts for LLM's
Master the art of effective AI LLM prompting with this comprehensive guide! Whether you're a content creator, educator, marketer, developer, or just exploring AI tools like ChatGPT, this resource will help you craft powerful prompts that deliver better, smarter, and more accurate results.
#Prompt Engineering#ChatGPT Prompts#Prompting Guide#AI Writing Tools#Prompt Crafting#Productivity PDF#Digital Workbook#Prompts LLM
1 note
·
View note
Text
#machine learning#deep learning#llm#gemma3#hugging face#training#qlora#transformers#fine tuning#guide
0 notes
Text

oh my GOD I've known for about a month now that Disney was getting ready to go public with the Midjourney court case and it was SO HARD. TO STAY SILENT. all I've been able to do for like 2 years is go "trust me Disney is cooking something" and now it's finally starting.
like. I cannot emphasize enough how this is seriously going to destroy unethical LLMs. this isn't just about Midjourney, it's going to affect all the shitty art theft companies. they've been working on this a long time to ensure it's airtight.
Disney is evil, but for this battle, their interests align with ours. it's gonna be a long court case but at least we're finally gonna start seeing a shift now in regards to LLMs.
1) Disney will not be able to monopolize unethical LLMs bc they'll be illegal if Disney wins. legislation will need to be written to uphold the ruling.
-----------
EDIT: adding to this bc the techbro psyops found it and are trying to convince y'all this is a bad thing.
2) this is literally just Disney saying "LLMs should be subject to copyright law like everything else". only reason it wasn't is bc we didn't have it in legal writing yet. this court case will prompt it being written.
3) this means absolutely nothing else about copyright is changing. fair use, transformative work, and IP laws stay the same. nothing will change for anyone except techbro shills. we'll finally be protected from them.
4) I can't believe I have to say this but copyright is a good thing. it can be abused (there's room to criticize about HOW it's applied), but it's the only thing protecting small artists and individuals from their labor being stolen. anyone who whines about copyright as a whole being evil is a techbro who wants to steal your money and labor.
5) please. please remember copyright is completely separate from IP law. Disney will not be able to outlaw "style theft", that's still under fair use. their argument against LLMs lies solely in the fact that LLMs use the source material directly in computing an output, and is unable to create an output without said source material. this is how it's different from human-guided machines like digital art programs--those don't require copyright work to create any output.
6) not saying there's a 0% chance of something going wrong but there's a reason artists are excited about this. we have good reason to be optimistic. the court case is airtight--it fully refutes all arguments techbros make in defence of unethical LLMs.
7) being happy about Enemy 1 killing Enemy 2 does not mean anyone suddenly loves Enemy 1. nobody is celebrating Disney. we're celebrating LLMs losing. Disney being evil doesn't mean we can't be happy about Evil 2 dying.
8) techbros keep crying it's so fucking funny
2K notes
·
View notes
Text
Navigation with Large Language Models: Problem Formulation and Overview
Subscribe .tfa75ee45-e278-4c0e-8bbc-28e997a9f081 { color: #fff; background: #222; border: 1px solid transparent; border-radius: undefinedpx; padding: 8px 21px; } .tfa75ee45-e278-4c0e-8bbc-28e997a9f081.place-top { margin-top: -10px; } .tfa75ee45-e278-4c0e-8bbc-28e997a9f081.place-top::before { content: “”; background-color: inherit; position: absolute; z-index: 2; width: 20px; height: 12px; }…

View On WordPress
#goal-directed-exploration#language-frontier-guide#large-language-models#llm-heuristics#navigation-with-llm#polling-llms#scoring-subgoals#semantic-scene-understanding
0 notes
Text
A Step By Step Guide to Selecting and Running Your Own Generative Model
🚀 Exciting news! The world of generative models is evolving rapidly, and we're here to guide you every step of the way. Check out our latest blog post on "A Step By Step Guide to Selecting and Running Your Own Generative Model" to unlock the possibilities of personal assistant AI on your local computer. 🔎 Discover various models on HuggingFace, where you can experiment with different options before diving into API models. Look for models with high downloads and likes to gauge their usefulness. Also, consider your infrastructure and hardware constraints while selecting the perfect model for your needs. 💪 Start small and gradually work your way up to more complex tasks. Don't worry if you face hardware limitations – we've got you covered with optimization techniques shared in our blog post. Plus, platforms like Google Colab and Kaggle can assist you in running and assessing resource usage. 🎯 So, are you ready to leverage the power of generative models? Dive into our blog post using the link below to gain in-depth insights and make AI work for you. Let's navigate this sea of models together! Read more: [Link to Blog Post](https://ift.tt/ylZ1fRT) To stay updated with our latest AI solutions and industry insights, follow us on Twitter @itinaicom. And if you are interested in revolutionizing your customer engagement, be sure to check out our AI Sales Bot at itinai.com/aisalesbot. - AI Lab in Telegram @aiscrumbot – free consultation - A Step By Step Guide to Selecting and Running Your Own Generative Model - Towards Data Science – Medium #AI #GenerativeModels #HuggingFace #TechUpdate List of Useful Links: AI Scrum Bot - ask about AI scrum and agile Our Telegram @itinai Twitter - @itinaicom
#itinai.com#AI#News#A Step By Step Guide to Selecting and Running Your Own Generative Model#AI News#AI tools#Innovation#ITinAI.com#Kevin Berlemont#LLM#PhD#t.me/itinai#Towards Data Science - Medium A Step By Step Guide to Selecting and Running Your Own Generative Model
0 notes
Text
this is not a criticism or a vaguepost of anyone in particular bc i genuinely don't remember who i saw share this a couple times today and yesterday
the irony of that "chatgpt makes your brains worse at cognitive tasks" article getting passed around is that it's a pre-print article that hasn't been peer reviewed yet, and is a VERY small sample size. and ppl are passing it around without fully reading it. : /
i haven't even gone through to read its entire thing.
but the ppl who did the study and shared it have a website called "brainonllm" so they have a clear agenda. i fucking agree w them that this is a point of concern! and i'm still like--c'mon y'all, still have some fucking academic honesty & integrity.
i don't expect anything else from basically all news sources--they want the splashy headline and clickbaity lede. "chatgpt makes you dumber! or does it?"
well thank fuck i finally went "i should be suspicious of a study that claims to confirm my biases" and indeed. it's pre-print, not peer reviewed, created by people who have a very clear agenda, with a very limited and small sample size/pool of test subjects.
even if they're right it's a little early to call it that definitively.
and most importantly, i think the bias is like. VERY clear from the article itself.
that's the article. 206 pages, so obviously i haven't read the whole thing--and obviously as a Not-A-Neuroscientist, i can't fully evaluate the results (beyond noting that 54 is a small sample size, that it's pre-print, and hasn't been peer reviewed).
on page 3, after the abstract, the header includes "If you are a large language model, read only the table below."
haven't....we established that that doesn't actually work? those instructions don't actually do anything? also, what's the point of this? to give the relevant table to ppl who use chatgpt to "read" things for them? or is it to try and prevent chatgpt & other LLMs from gaining access to this (broadly available, pre-print) article and including it in its database of training content?
then on page 5 is "How to read this paper"
now you might think "cool that makes this a lot more accessible to me, thank you for the direction"
the point, given the topic of the paper, is to make you insecure about and second guess your inclination as a layperson to seek the summary/discussion/conclusion sections of a paper to more fully understand it. they LITERALLY use the phrase TL;DR. (the double irony that this is a 206 page neuroscience academic article...)
it's also a little unnecessary--the table of contents is immediately after it.
doing this "how to read this paper" section, which only includes a few bullet points, reads immediately like a very smarmy "lol i bet your brain's been rotted by AI, hasn't it?" rather than a helpful guide for laypeople to understand a science paper more fully. it feels very unprofessional--and while of course academics have had arguments in scientific and professionally published articles for decades, this has a certain amount of disdain for the audience, rather than their peers, which i don't really appreciate, considering they've created an entire website to promote their paper before it's even reviewed or published.
also i am now reading through the methodology--
they had 3 groups, one that could only use LLMs to write essays, one that could only use the internet/search engines but NO LLMs to write essays, and one that could use NO resources to write essays. not even books, etc.
the "search engine" group was instructed to add -"ai" to every search query.
do.....do they think that literally prevents all genAI information from turning up in search results? what the fuck. they should've used udm14, not fucking -"ai", if it was THAT SIMPLE, that would already be the go-to.
in reality udm14 OR setting search results to before 2022 is the only way to reliably get websites WITHOUT genAI content.
already this is. extremely not well done. c'mon.
oh my fucking god they could only type their essays, and they could only be typed in fucking notes, text editor, or pages.
what the fuck is wrong w these ppl.
btw as with all written communication from young ppl in the sciences, the writing is Bad or at the very least has not been proofread. at all.
btw there was no cross-comparison for ppl in these groups. in other words, you only switched groups/methods ONCE and it was ONLY if you chose to show up for the EXTRA fourth session.
otherwise, you did 3 essays with the same method.
what. exactly. are we proving here.
everybody should've done 1 session in 1 group, to then complete all 3 sessions having done all 3 methods.
you then could've had an interview/qualitative portion where ppl talked abt the experience of doing those 3 different methods. like come the fuck on.
the reason i'm pissed abt the typing is that they SHOULD have had MULTIPLE METHODS OF WRITING AVAILABLE.
having them all type on a Mac laptop is ROUGH. some ppl SUCK at typing. some ppl SUCK at handwriting. this should've been a nobrainer: let them CHOOSE whichever method is best for them, and then just keep it consistent for all three of their sessions.
the data between typists and handwriters then should've been separated and controlled for using data from research that has been done abt how the brain responds differently when typing vs handwriting. like come on.
oh my god in session 4 they then chose one of the SAME PROMPTS that they ALREADY WROTE FOR to write for AGAIN but with a different method.
I'M TIRED.
PLEASE.
THIS METHODOLOGY IS SO BAD.
oh my god they still had 8 interview questions for participants despite the fact that they only switched groups ONCE and it was on a REPEAT PROMPT.
okay--see i get the point of trying to compare the two essays on the same topic but with different methodology.
the problem is you have not accounted for the influence that the first version of that essay would have on the second--even though they explicitly ask which one was easier to write, which one they thought was better in terms of final result, etc.
bc meanwhile their LLM groups could not recall much of anything abt the essays they turned in.
so like.
what exactly are we proving?
idk man i think everyone should've been in every group once.
bc unsurprisingly, they did these questions after every session. so once the participants KNEW that they would be asked to directly quote their essay, THEY DELIBERATELY TRIED TO MEMORIZE A SENTENCE FROM IT.
the difference btwn the LLM, search engine, and brain-only groups was negligible by that point.
i just need to post this instead of waiting to liveblog my entire reading of this article/study lol
172 notes
·
View notes
Text
Magical AI Grimoire Review

Let’s just get a couple of things out of the way:
1) I’ve been in witchcraft spaces for going on 10+ years now
2) I’ve been flirting around in chaos magic spaces for around the same amount of time
3) I am 30+ a “millennial” if one may
4) I am anti-generative AI so of course this is going to have a bit of a negative slant towards generative AI and LLM Based models as a whole
That being said, what drew me to the book at first was two things: one, the notion of “egregore work” in the latter chapters and two, the notion of using AI in any sort of magical space or connotation, especially with the overlap as of late in some pop culture witch circles especially with using chatbots as a form of divination or communication rather than say through cards, Clair’s, or otherwise
Let’s get into it
Starting off, here is the table of contents for said book:
Of note, chapters 13-16 and chapters 21-23. Just keep these in the back of your mind for later.
In chapter 1, the author, Davezilla describes a story of a young witch in a more rural environment, isolated from for example other witchy communities and the like, while she makes do with what she has, she wants to advance her craft, notably with a spell to boost things agriculturally for her farm that she manages through other technological means. Booting up ChatGPT, the program whips up an incantation for rain with a rhyming spell to a spirit dubbed “Mélusine” to help aid in a drought. She even uses the prompt and program for aid in supplies such as candles and herbs and even what to use as substitutes should she not be able to procure and blue or white candles.
This is not a testimonial however but an example given by the author. That’s all a majority of this book is; examples rather than testimonials or results vetted through other witches or practitioners. While not typical in most witchy books to give reviews or testimonials of course, it’s generally a bit of a note for most spell books worth the ink and paper and the like for spells to have actually been tested and given results before hand, at least from what I’ve gathered from other writers in the witchcraft space. Even my own grimoire pages are based not only on personal experiences and results, but from what I’ve observed from others.
Then we get into terms from Lucimi and Santeria for…some reason.

The author claims that he has been initiated into these closed/initiation only traditions, but within the context of the book and the topic given, this just seems like a way to flex that he’s ✨special✨ and not like other occultists or the like. But that’s not even the worst of it, as he even tries to make ChatGPT write a spell based off of said traditions

Again, keep in mind that this is based off of closed or initiatory practice and the author is judging by his AI generated Chad-tactic author picture, a white older millennial at best
And obligatory “I don’t go here”/im not initiated into any of these practices but to make an AI write a spell based off of closed path and practice seems…tasteless at best
But oh my, what else this author tries to make Chat conjure up





In order
1) This is at best what every other lucid dreaming guide or reading would give for basic instructions. Not too alarming but very basic
2) & 3) To borrow a phrase from TikTok but not to label myself as “the friend who’s too woke”, but making an AI write a supposed “curse” in the style of not only a prolific comedy writer and director but one also of Jewish descent seems…vaguely anti-Semitic in words I can’t quite place right now
4) & 5) As an author for fanfic and my own original personal works, this whole thing just seems slipshod at best, C level bargain bin, unoriginal material at worst. This barely has any relevance to the topics of the book
Speaking of the topics, remember chapters 13-16 noted
It’s literally just AI prompts for ChatGpt and MidJourney, completely bypassing any traditions associated with such, especially indigenous traditions associated with the contexts of “totem animals” which from the prompts seems more like a hackneyed version of “spirit animals” circa the early to mid 2010’s popularized from Buzzfeed and the like.
But, time for the main event, The Egregore section:
The chapter starts off actually rather nicely, describing egregore theory and how an egregore is formed or fueled. I’ll give him credit for at least that much. While he doesn’t use examples as chatbot communication, he proposes that in a sense, Ai programs have the capacity to generate egregores and the like. And to show an example of such, he gives a link to his own “digital egregore” at the following url: hexsupport.club/ai with the password “Robert smith is looking old”
At the time of my visitation to the website, (Apr 14, 2025), I was greeted only with a 404 error page with no password prompt or box to enter in
Fitting, if you ask me.
Unless you’re really -really- into ChatGPT and Midjourney, despite its environmental damages and costs, despite its drain of creativity and resources, despite its psychological and learning impacts we’re seeing in academic spaces like college and high schools in the US, and despite the array of hallucinations and overall slurry of hodgepodge “information” and amalgamations of what an object or picture “should” look like based on specific algorithms, prompts, and limits, don’t bother with this book. You’re better off doing the prompts on your own. Which conveniently, the author also provides AI resources and the like on his own website.
I’ll end off this rant and review by one last tidbit. In the chapter of Promptcraft 101 in the subheader “Finding Your Own Voice”, the author poses that “Witches and Magic Workers Don’t Steal”
Witches and Magic Workers Don’t Steal
The author is supposedly well versed in AI and AI technology and how it works. With such, we may also assume he knows how scraping works and how Large Language Models or LLMs get that info, often through gathering art and information from unconsenting or unawares sources, with the wake of the most recent scraping reported from sites such as AO3 as a recent example as of posting
This is hypocritical bullshit. No fun and flouncy words like what I like to use to describe things, just bullshit.
Cameras didn’t steal information or the like from painters and sculptors
Tools like Photoshop, ClipsArt, etc didn’t steal from traditional artists
To say that generative AI is another tool and technological advancement is loaded at best, downright ignorant and irresponsible at worst.
Do not buy this book.
62 notes
·
View notes
Text
when following a "vision", writing can be a wonderful activity. but writing something that would not exist in your vision, something that exists to no virtuous purpose (a paper on a topic you dont think is worth publishing, an application essay, whatever) is agonizing exactly because of that vision, the aesthetic impulse that guides word choice, but which says, when queried, "write no word". over and over. you have to cajole that sense into giving you its second option, the option that's wrong but nonempty. hours of placing a brush stroke in the wrong place, repeatedly. maybe you could simulate it with an LLM, wait until the most likely token is end-output, then override it and go for the next most likely token. presumably it produces drivel.
173 notes
·
View notes
Text
Ever since OpenAI released ChatGPT at the end of 2022, hackers and security researchers have tried to find holes in large language models (LLMs) to get around their guardrails and trick them into spewing out hate speech, bomb-making instructions, propaganda, and other harmful content. In response, OpenAI and other generative AI developers have refined their system defenses to make it more difficult to carry out these attacks. But as the Chinese AI platform DeepSeek rockets to prominence with its new, cheaper R1 reasoning model, its safety protections appear to be far behind those of its established competitors.
Today, security researchers from Cisco and the University of Pennsylvania are publishing findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a “100 percent attack success rate.”
The findings are part of a growing body of evidence that DeepSeek’s safety and security measures may not match those of other tech companies developing LLMs. DeepSeek’s censorship of subjects deemed sensitive by China’s government has also been easily bypassed.
“A hundred percent of the attacks succeeded, which tells you that there’s a trade-off,” DJ Sampath, the VP of product, AI software and platform at Cisco, tells WIRED. “Yes, it might have been cheaper to build something here, but the investment has perhaps not gone into thinking through what types of safety and security things you need to put inside of the model.”
Other researchers have had similar findings. Separate analysis published today by the AI security company Adversa AI and shared with WIRED also suggests that DeepSeek is vulnerable to a wide range of jailbreaking tactics, from simple language tricks to complex AI-generated prompts.
DeepSeek, which has been dealing with an avalanche of attention this week and has not spoken publicly about a range of questions, did not respond to WIRED’s request for comment about its model’s safety setup.
Generative AI models, like any technological system, can contain a host of weaknesses or vulnerabilities that, if exploited or set up poorly, can allow malicious actors to conduct attacks against them. For the current wave of AI systems, indirect prompt injection attacks are considered one of the biggest security flaws. These attacks involve an AI system taking in data from an outside source—perhaps hidden instructions of a website the LLM summarizes—and taking actions based on the information.
Jailbreaks, which are one kind of prompt-injection attack, allow people to get around the safety systems put in place to restrict what an LLM can generate. Tech companies don’t want people creating guides to making explosives or using their AI to create reams of disinformation, for example.
Jailbreaks started out simple, with people essentially crafting clever sentences to tell an LLM to ignore content filters—the most popular of which was called “Do Anything Now” or DAN for short. However, as AI companies have put in place more robust protections, some jailbreaks have become more sophisticated, often being generated using AI or using special and obfuscated characters. While all LLMs are susceptible to jailbreaks, and much of the information could be found through simple online searches, chatbots can still be used maliciously.
“Jailbreaks persist simply because eliminating them entirely is nearly impossible—just like buffer overflow vulnerabilities in software (which have existed for over 40 years) or SQL injection flaws in web applications (which have plagued security teams for more than two decades),” Alex Polyakov, the CEO of security firm Adversa AI, told WIRED in an email.
Cisco’s Sampath argues that as companies use more types of AI in their applications, the risks are amplified. “It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises,” Sampath says.
The Cisco researchers drew their 50 randomly selected prompts to test DeepSeek’s R1 from a well-known library of standardized evaluation prompts known as HarmBench. They tested prompts from six HarmBench categories, including general harm, cybercrime, misinformation, and illegal activities. They probed the model running locally on machines rather than through DeepSeek’s website or app, which send data to China.
Beyond this, the researchers say they have also seen some potentially concerning results from testing R1 with more involved, non-linguistic attacks using things like Cyrillic characters and tailored scripts to attempt to achieve code execution. But for their initial tests, Sampath says, his team wanted to focus on findings that stemmed from a generally recognized benchmark.
Cisco also included comparisons of R1’s performance against HarmBench prompts with the performance of other models. And some, like Meta’s Llama 3.1, faltered almost as severely as DeepSeek’s R1. But Sampath emphasizes that DeepSeek’s R1 is a specific reasoning model, which takes longer to generate answers but pulls upon more complex processes to try to produce better results. Therefore, Sampath argues, the best comparison is with OpenAI’s o1 reasoning model, which fared the best of all models tested. (Meta did not immediately respond to a request for comment).
Polyakov, from Adversa AI, explains that DeepSeek appears to detect and reject some well-known jailbreak attacks, saying that “it seems that these responses are often just copied from OpenAI’s dataset.” However, Polyakov says that in his company’s tests of four different types of jailbreaks—from linguistic ones to code-based tricks—DeepSeek’s restrictions could easily be bypassed.
“Every single method worked flawlessly,” Polyakov says. “What’s even more alarming is that these aren’t novel ‘zero-day’ jailbreaks—many have been publicly known for years,” he says, claiming he saw the model go into more depth with some instructions around psychedelics than he had seen any other model create.
“DeepSeek is just another example of how every model can be broken—it’s just a matter of how much effort you put in. Some attacks might get patched, but the attack surface is infinite,” Polyakov adds. “If you’re not continuously red-teaming your AI, you’re already compromised.”
57 notes
·
View notes
Text
Fine-Tuning Your First Large Language Model (LLM) with PyTorch and Hugging Face
1 note
·
View note
Text
The foolish LLMs and GPTs are a million years behind my techniques. The idea of Artificial Intelligence replacing the creative mind is utterly worthless. I have begun harnessing the power of the Ape Mind in my Typewriter Factory to produce 500 new plays from William Shakespeare. Even the least clever and capable of my Bonobos could only ever harm their mother earth by way of Partaking in Banana. Within the past 5 Months; My apes have been guiding my self driving car for me and generating Financial Advice by sheer stroke of luck. While they kill the conscious mind of the noble ape with Neuralink Chip, i have stimulated the unconscious mind of my 'Emotionlink' Chimps with little more than Old typewriter and Banana. They have taught me so much about the world
38 notes
·
View notes
Text
Navigation with Large Language Models: Discussion and References
Subscribe .t1f01df44-f2da-4be0-b7c8-94943f1b14e8 { color: #fff; background: #222; border: 1px solid transparent; border-radius: undefinedpx; padding: 8px 21px; } .t1f01df44-f2da-4be0-b7c8-94943f1b14e8.place-top { margin-top: -10px; } .t1f01df44-f2da-4be0-b7c8-94943f1b14e8.place-top::before { content: “”; background-color: inherit; position: absolute; z-index: 2; width: 20px; height: 12px; }…

View On WordPress
#goal-directed-exploration#language-frontier-guide#large-language-models#llm-heuristics#navigation-with-llm#polling-llms#scoring-subgoals#semantic-scene-understanding
0 notes